
# Complex task processing

- **Deepseek R1 0528 AWQ** (MIT license): The 4-bit AWQ-quantized version of the DeepSeek-R1-0528 671B model, suitable for use on high-end GPU nodes (a minimal loading sketch follows this list). Tags: Large Language Model, Transformers. Author: adamo1139.
- **Community Request 01 12B**: A pre-trained language model merged from multiple Captain-Eris series models using the mergekit tool. Tags: Large Language Model, Transformers. Author: Nitral-AI.
- **Badger Lambda Llama 3 8b**: A Llama 3 8B instruction model generated through a recursive, maximally pairwise-disjoint, normalized denoising Fourier interpolation, incorporating features from multiple high-quality models. Tags: Large Language Model, Transformers. Author: maldv.
- **Tulu 65b**: A 65B-parameter LLaMA model fine-tuned on a mix of instruction datasets; a result of open-resource instruction-tuning research with strong overall performance. Tags: Large Language Model, Transformers, English. Author: allenai.
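As a rough illustration of how a 4-bit AWQ checkpoint such as the first entry above might be served, here is a minimal sketch using vLLM. The repository id, GPU count, and sampling settings are assumptions for illustration only and are not taken from this listing; a 671B model still requires a multi-GPU node even at 4-bit precision.

```python
# Minimal sketch: serving an AWQ-quantized checkpoint with vLLM.
# The repo id and tensor_parallel_size are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="adamo1139/DeepSeek-R1-0528-AWQ",  # hypothetical repo id
    quantization="awq",                      # load the 4-bit AWQ weights
    tensor_parallel_size=8,                  # shard across 8 GPUs (assumption)
)

params = SamplingParams(max_tokens=256, temperature=0.6)
outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```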